
    Exchangeable Variable Models

    A sequence of random variables is exchangeable if its joint distribution is invariant under variable permutations. We introduce exchangeable variable models (EVMs) as a novel class of probabilistic models whose basic building blocks are partially exchangeable sequences, a generalization of exchangeable sequences. We prove that a family of tractable EVMs is optimal under zero-one loss for a large class of functions, including parity and threshold functions, and strictly subsumes existing tractable independence-based model families. Extensive experiments show that EVMs outperform state-of-the-art classifiers such as SVMs and probabilistic models which are solely based on independence assumptions. Comment: ICML 201
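
    As a rough illustration of the exchangeability property defined in this abstract (not the authors' EVM construction; the variable count and weights below are made-up placeholders), the following Python sketch builds a joint distribution over binary variables that depends only on how many of them are 1, and checks that it is invariant under an arbitrary permutation of the variables:

        import itertools
        import numpy as np

        # Illustrative sketch: a distribution over n binary variables whose
        # unnormalized probability depends only on the count of ones. Permuting
        # the variables leaves the count, and hence the probability, unchanged,
        # so the joint distribution is exchangeable.
        n = 4
        w = np.array([1.0, 2.0, 4.0, 2.0, 1.0])  # placeholder weight per count 0..n

        def joint(x):
            """Unnormalized probability of an assignment x in {0,1}^n."""
            return w[sum(x)]

        Z = sum(joint(x) for x in itertools.product([0, 1], repeat=n))

        x = (1, 0, 1, 0)
        perm = (2, 0, 3, 1)  # an arbitrary permutation of the variable indices
        x_perm = tuple(x[i] for i in perm)
        assert abs(joint(x) / Z - joint(x_perm) / Z) < 1e-12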

    Lifted Probabilistic Inference: An MCMC Perspective

    The general consensus seems to be that lifted inference is concerned with exploiting model symmetries and grouping indistinguishable objects at inference time. Since first-order probabilistic formalisms are essentially template languages providing a more compact representation of a corresponding ground model, lifted inference tends to work especially well in these models. We show that the notion of indistinguishability manifests itself on several different levels: the level of constants, the level of ground atoms (variables), the level of formulas (features), and the level of assignments (possible worlds). We discuss existing work in the MCMC literature on exploiting symmetries on the level of variable assignments and relate it to novel results in lifted MCMC.
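
    A minimal sketch of the intuition behind symmetries at the level of assignments (this is illustrative only, not the paper's algorithm; the symmetric scoring function below is a placeholder): when the model's score is invariant under permuting its ground atoms, possible worlds fall into orbits of equal probability, so exact or MCMC computations can visit one representative per orbit and weight it by the orbit size.

        import itertools

        n = 4                          # number of interchangeable ground atoms

        def score(world):
            # Placeholder symmetric model: the score depends only on the count
            # of true atoms, so it is invariant under permuting the atoms.
            return 2.0 ** sum(world)

        # Group possible worlds into orbits; for this fully symmetric model an
        # orbit is simply the set of worlds with the same number of true atoms.
        orbits = {}
        for world in itertools.product([0, 1], repeat=n):
            orbits.setdefault(sum(world), []).append(world)

        # Partition function computed two ways: over all ground worlds, and over
        # one representative per orbit weighted by the orbit size.
        Z_ground = sum(score(w) for orbit in orbits.values() for w in orbit)
        Z_lifted = sum(len(orbit) * score(orbit[0]) for orbit in orbits.values())
        assert abs(Z_ground - Z_lifted) < 1e-9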

    LRMM: Learning to Recommend with Missing Modalities

    Multimodal learning has shown promising performance in content-based recommendation due to the auxiliary user and item information of multiple modalities such as text and images. However, the problem of incomplete and missing modalities is rarely explored, and most existing methods fail to learn a recommendation model with missing or corrupted modalities. In this paper, we propose LRMM, a novel framework that mitigates not only the problem of missing modalities but also, more generally, the cold-start problem of recommender systems. We propose modality dropout (m-drop) and a multimodal sequential autoencoder (m-auto) to learn multimodal representations for complementing and imputing missing modalities. Extensive experiments on real-world Amazon data show that LRMM achieves state-of-the-art performance on rating prediction tasks. More importantly, LRMM is more robust than previous methods in alleviating data sparsity and the cold-start problem. Comment: 11 pages, EMNLP 201
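
    A minimal sketch of the general idea behind modality dropout (an assumption about the mechanism, not the paper's implementation; the modality names, feature shapes, and drop rate are made up): during training, an entire modality's features are randomly masked so the model learns to cope with, and later impute, missing modalities.

        import numpy as np

        rng = np.random.default_rng(0)

        def modality_dropout(modalities, p_drop=0.3):
            """modalities: dict mapping modality name -> feature array for one batch.
            Returns a copy in which each modality is zeroed out with probability
            p_drop, while always keeping at least one modality intact."""
            names = list(modalities)
            keep = str(rng.choice(names))          # one modality is never dropped
            out = {}
            for name, feats in modalities.items():
                drop = name != keep and rng.random() < p_drop
                out[name] = np.zeros_like(feats) if drop else feats.copy()
            return out

        # Hypothetical batch with two modalities (e.g. review text and product image).
        batch = {
            "text":  rng.normal(size=(8, 128)),
            "image": rng.normal(size=(8, 256)),
        }
        masked = modality_dropout(batch)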